In contrast to control-theoretic methods, the lack of a stability guarantee remains a significant problem for model-free reinforcement learning (RL) methods. Jointly learning a policy and a Lyapunov function has recently emerged as a promising approach to endowing the whole system with a stability guarantee. However, the classical Lyapunov constraints introduced in prior work cannot stabilize the system during sampling-based optimization. Therefore, we propose the Adaptive Stability Certification (ASC), which drives the system toward sampling-based stability. Because the ASC condition can guide a heuristic search for the optimal policy, we design the Adaptive Lyapunov-based Actor-Critic (ALAC) algorithm on top of it. At the same time, our algorithm avoids the optimization difficulty, present in current approaches, of coupling a variety of constraints into the objective. When evaluated on ten robotic tasks, our method achieves lower accumulated cost and fewer stability constraint violations than previous studies.
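The classical sample-based Lyapunov constraint the abstract refers to can be sketched as follows: a learned candidate Lyapunov function must decrease along sampled transitions. This is a minimal illustration (not the authors' ASC condition); the decay rate `alpha` and the penalty formulation are illustrative assumptions.

```python
import numpy as np

def lyapunov_violation(L_s, L_s_next, alpha=0.1):
    """Classical sample-based Lyapunov decrease condition:
    L(s') - L(s) <= -alpha * L(s) must hold on sampled transitions.
    Returns the mean positive violation, usable as a penalty term in
    the policy objective (illustrative only, not the ASC condition)."""
    residual = L_s_next - L_s + alpha * L_s
    return np.maximum(residual, 0.0).mean()

# Transitions whose candidate Lyapunov value decreases fast enough incur
# zero penalty; the second transition below violates the condition.
L_s = np.array([1.0, 2.0])
L_s_next = np.array([0.5, 2.1])
penalty = lyapunov_violation(L_s, L_s_next, alpha=0.1)
```

A sampling-based optimizer would add this penalty (or enforce it as a constraint) when updating the policy and the Lyapunov critic jointly.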
translated by Google Translate
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
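The k-fold cross-validation strategy mentioned above (used by 37% of respondents, often combined with ensembling) can be sketched in a few lines; the fold count and seeding below are illustrative assumptions, not survey specifics.

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds.
    Each fold serves once as the validation set; the k resulting models
    can then be ensembled, e.g. by averaging their predictions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

folds = kfold_indices(10, k=5)
```

Every sample appears in exactly one fold, so the union of folds covers the whole training set.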
This technical report briefly describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard. SuperGLUE is more challenging than the widely used general language understanding evaluation (GLUE) benchmark, containing eight difficult language understanding tasks, including question answering, natural language inference, word sense disambiguation, coreference resolution, and reasoning. [Method] Instead of arbitrarily increasing the size of a pretrained language model (PLM), our aim is to 1) fully extract knowledge from the input pretraining data given a certain parameter budget, e.g., 6B, and 2) effectively transfer this knowledge to downstream tasks. To achieve goal 1), we propose self-evolution learning for PLMs to wisely predict the informative tokens that should be masked, and supervise the masked language modeling (MLM) process with rectified smooth labels. For goal 2), we leverage the prompt transfer technique to improve the low-resource tasks by transferring the knowledge from the foundation model and related downstream tasks to the target task. [Results] According to our submission record (Oct. 2022), with our optimized pretraining and fine-tuning strategies, our 6B Vega method achieved new state-of-the-art performance on 4/8 tasks, sitting atop the SuperGLUE leaderboard on Oct. 8, 2022, with an average score of 91.3.
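Supervising masked language modeling with smoothed labels, as described for goal 1), can be illustrated with a plain label-smoothed cross-entropy over the vocabulary. This is a generic sketch; the exact "rectified" smoothing used by Vega v2 is not reproduced here, and `eps` is an illustrative hyperparameter.

```python
import numpy as np

def log_softmax(x):
    """Numerically stable log-softmax over a logit vector."""
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

def smoothed_mlm_loss(logits, target_id, eps=0.1):
    """Cross-entropy between the model's masked-token distribution and a
    smoothed target: mass 1-eps on the true token, eps spread over the
    rest (generic label smoothing, not Vega v2's exact scheme)."""
    V = logits.shape[0]
    q = np.full(V, eps / (V - 1))  # mass spread over non-target tokens
    q[target_id] = 1.0 - eps
    return -(q * log_softmax(logits)).sum()

logits = np.array([2.0, 0.0, 0.0])  # toy 3-token vocabulary
loss = smoothed_mlm_loss(logits, target_id=0, eps=0.1)
```

With `eps=0` this reduces to the standard MLM negative log-likelihood of the masked token.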
Visual imitation learning provides an effective and intuitive way for robotic systems to acquire novel manipulation skills. However, simultaneously learning geometric task constraints and control policies from visual inputs alone remains a challenging problem. In this paper, we propose Keypoint-based Visual Imitation Learning (K-VIL), an approach that automatically extracts sparse, object-centric, and embodiment-independent task representations from a small number of human demonstration videos. The task representation consists of keypoint-based geometric constraints on principal manifolds, their associated local frames, and the movement primitives required for task execution. Our approach is able to extract such task representations from a single demonstration video and to update them incrementally as new demonstrations become available. To reproduce manipulation skills in novel scenes using the learned geometric constraints, we introduce a novel keypoint-based admittance controller. We evaluate our approach in several real-world applications, showcasing its ability to handle cluttered scenes, new instances of categorical objects, and large variations in object pose and shape, as well as its efficiency and robustness in one-shot and few-shot imitation learning settings. Videos and source code are available at https://sites.google.com/view/k-vil.
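An admittance controller of the kind mentioned above maps external or constraint forces to compliant motion. The following is a generic mass-damper-spring admittance law, not K-VIL's actual keypoint controller; the gains `m`, `d`, `k` and the time step are illustrative assumptions.

```python
def admittance_step(x, v, f_ext, m=1.0, d=5.0, k=20.0, dt=0.01):
    """One semi-implicit Euler step of the admittance law
    m*a + d*v + k*x = f_ext: an external force produces compliant
    displacement instead of being resisted rigidly."""
    a = (f_ext - d * v - k * x) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# Pushing with a constant external force yields a bounded compliant
# displacement toward the equilibrium f_ext / k.
x, v = 0.0, 0.0
for _ in range(100):
    x, v = admittance_step(x, v, f_ext=10.0)
```

In a keypoint-based setting, such a law could be applied per keypoint so that geometric constraints act as virtual springs guiding the end-effector.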
Scene graph generation (SGG) is a fundamental task aimed at detecting visual relationships between objects in an image. Prevailing SGG methods require all object classes to be given in the training set. Such a closed setting limits the practical application of SGG. In this paper, we introduce open-vocabulary scene graph generation, a novel, realistic, and challenging setting in which a model is trained on a set of base object classes but is required to infer relations for unseen target object classes. To this end, we propose a two-step method that first pre-trains on large amounts of coarse-grained region-caption data and then leverages two prompt-based techniques to finetune the pre-trained model without updating its parameters. Moreover, our method can support inference over completely unseen object classes, which existing methods are unable to handle. In extensive experiments on three benchmark datasets (Visual Genome, GQA, and Open Images), our method significantly outperforms recent strong SGG methods in the open-vocabulary (OV-SGG) setting as well as in conventional closed SGG.
Temporal action localization plays an important role in video analysis, aiming to localize and classify actions in untrimmed videos. Previous methods usually predict actions on a feature space of a single temporal scale. However, temporal features at low-level scales lack sufficient semantics for action classification, while high-level scales cannot provide rich details of the action boundaries. To address this issue, we propose to predict actions on feature spaces of multiple temporal scales. Specifically, we use a refined feature pyramid of different scales to pass semantics from high-level scales down to low-level scales. Moreover, to establish long-range temporal context over the entire video, we use a spatial-temporal transformer encoder to capture long-range dependencies among video frames. The refined features with long-range dependencies are then fed into a classifier for coarse action predictions. Finally, to further improve prediction accuracy, we propose a frame-level self-attention module to refine the classification and boundaries of each action instance. Extensive experiments show that the proposed method outperforms state-of-the-art approaches on the THUMOS14 dataset and achieves comparable performance on the ActivityNet-1.3 dataset. Compared with A2Net (TIP20, Avg {0.3:0.7}), Sub-Action (CSVT2022, Avg {0.1:0.5}), and AFSD (CVPR21, Avg {0.3:0.7}), the proposed method improves performance by 12.6%, 17.4%, and 2.2%, respectively.
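The top-down refinement step described above, passing semantics from coarser temporal scales to finer ones, can be sketched with nearest-neighbor upsampling and addition. This is a minimal sketch assuming each pyramid level halves the temporal length, not the authors' exact architecture.

```python
import numpy as np

def refine_pyramid(features):
    """Top-down refinement of a temporal feature pyramid: each coarser
    (more semantic) level is upsampled in time and added to the next
    finer level, so low-level scales inherit high-level semantics.
    `features` is ordered fine-to-coarse; lengths must halve per level."""
    refined = [features[-1]]  # coarsest level is kept as-is
    for feat in reversed(features[:-1]):
        top = refined[0]
        # nearest-neighbor temporal upsampling: duplicate coarse frames
        up = np.repeat(top, len(feat) // len(top), axis=0)
        refined.insert(0, feat + up)
    return refined

# Toy pyramid: 4, 2, and 1 time steps with 2 channels each.
features = [np.ones((4, 2)), np.ones((2, 2)), np.ones((1, 2))]
refined = refine_pyramid(features)
```

The finest level now aggregates contributions from every coarser scale, which is what lets it classify actions while retaining boundary detail.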
Semantic communications have gained interest for their ability to significantly reduce the amount of data to be transmitted without losing critical information. Most existing works explore the semantic encoding and transmission of text, applying techniques from natural language processing (NLP) to interpret the meaning of the text. In this paper, we envision semantic communications for image data, which are richer in semantics and more bandwidth-hungry. We propose a reinforcement-learning-based adaptive semantic coding (RL-ASC) approach that encodes images beyond the pixel level. First, we define the semantic concept of image data, which includes category, spatial arrangement, and visual features, as the representation unit, and propose a convolutional semantic encoder to extract semantic concepts. Second, we propose an image reconstruction criterion that evolves from traditional pixel-wise similarity to semantic similarity and perceptual performance. Third, we design a novel RL-based semantic bit allocation model, whose reward is the increase in rate-semantic-perceptual performance after encoding a given semantic concept at an adaptive quantization level. Thus, task-related information is properly preserved and reconstructed while less important data is discarded. Finally, we propose a generative adversarial network (GAN)-based semantic decoder that fuses local and global features via an attention module. Experimental results demonstrate that the proposed RL-ASC is robust to noise, can reconstruct visually pleasing and semantically consistent images, and saves a substantial number of bits compared with standard codecs and other deep-learning-based image codecs.
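The RL reward described above trades a gain in rate-semantic-perceptual performance against the bits spent on a concept. The scalarization below is a hypothetical sketch (the paper's exact reward is not reproduced here); `lam` is an assumed rate-penalty weight.

```python
def bit_allocation_reward(perf_before, perf_after, bits_spent, lam=0.01):
    """Hypothetical reward for encoding one semantic concept at a chosen
    quantization level: performance gain minus a rate penalty. A positive
    reward means the extra bits were worth spending on this concept."""
    return (perf_after - perf_before) - lam * bits_spent

# Encoding a concept raises performance from 0.70 to 0.78 at a cost of
# 4 bits, which nets a positive reward under this weighting.
r = bit_allocation_reward(0.70, 0.78, bits_spent=4.0, lam=0.01)
```

An RL agent maximizing this reward would allocate fine quantization only to concepts whose reconstruction quality improves enough to justify the rate.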
In this paper, we present the Multi-Forgery Detection Challenge, held in conjunction with the IEEE Computer Society Workshop on Media Forensics at CVPR 2022. Our Multi-Forgery Detection Challenge aims to detect automatic image manipulations, including but not limited to image editing, image synthesis, image generation, image photoshopping, etc. Our challenge attracted 674 teams from around the world, with about 2,000 valid result submissions. We invited the top ten teams to present their solutions to the challenge, of which three teams won awards in the grand finale. In this paper, we present the solutions of the top three teams in order to boost research in the field of image forgery detection.
Current studies on scene graph generation (SGG) focus on solving the long-tailed problem in order to generate unbiased scene graphs. However, most de-biasing methods overemphasize the tail predicates and underestimate the head ones throughout training, thereby wrecking the representation ability of head predicate features. Furthermore, these impaired features of head predicates harm the learning of tail predicates. In fact, the inference of tail predicates heavily depends on the general patterns learned from head ones, e.g., "standing on" depends on "on". Thus, these de-biasing SGG methods can achieve neither excellent performance on tail predicates nor satisfactory behavior on head ones. To address this issue, we propose a Dual-branch Hybrid Learning network (DHL) that takes care of both head and tail predicates for SGG, comprising a Coarse-grained Learning Branch (CLB) and a Fine-grained Learning Branch (FLB). Specifically, the CLB is responsible for learning expertise and robust features of head predicates, while the FLB is expected to predict informative tail predicates. Furthermore, DHL is equipped with a Branch Curriculum Schedule (BCS) to make the two branches work well together. Experiments show that our approach achieves new state-of-the-art performance on the VG and GQA datasets and strikes a trade-off between the performance of tail predicates and head ones. Moreover, extensive experiments on two downstream tasks (i.e., image captioning and sentence-to-graph retrieval) further verify the generalization ability and practicality of our method.
Gastrointestinal endoscopic surgery (GES) places high demands on instrument size and distal dexterity, owing to the narrow endoscopic channels and the tortuous human gastrointestinal tract. This paper utilizes nickel-titanium (NiTi) wires to develop a miniature 3-DOF (pitch-yaw-translation) flexible parallel robotic wrist (FPRW). In addition, we assembled an electric knife on the wrist's connection interface and then teleoperated it to perform endoscopic submucosal dissection (ESD) in a porcine stomach. The effective performance in each ESD workflow demonstrates that the designed FPRW has a sufficient workspace, high distal dexterity, and high positioning accuracy.